This is a brief technical report on our proposed method for the Multiple-Object Tracking (MOT) Challenge in Complex Environments. We treat the MOT task as a two-stage task consisting of human detection and trajectory matching. Specifically, we design an improved human detector and associate most of the detections to guarantee the integrity of the motion trajectories. We also propose a location-wise matching matrix to obtain more accurate trajectory matching. Without any model merging, our method achieves 66.672 HOTA and 93.971 MOTA on the DanceTrack challenge dataset.
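The report does not spell out how the location-wise matching matrix is built. The sketch below is a minimal, hypothetical illustration of a location-aware association step: an IoU cost is combined with a normalized center-distance term and solved with the Hungarian algorithm. The box format, the weight `w_loc`, and the diagonal normalization are assumptions, not values from the report.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU between two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, w_loc=0.3, img_diag=2203.0):
    """Match track boxes to detection boxes with an IoU + location cost.

    `w_loc` and the normalization by the image diagonal are illustrative
    assumptions, not values from the report.
    """
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            ct = np.array([(t[0] + t[2]) / 2, (t[1] + t[3]) / 2])
            cd = np.array([(d[0] + d[2]) / 2, (d[1] + d[3]) / 2])
            loc = np.linalg.norm(ct - cd) / img_diag   # location-wise term
            cost[i, j] = (1 - iou(t, d)) + w_loc * loc
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```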
Recent advances in text-to-image generation have witnessed the rise of diffusion models as powerful generative models. Nevertheless, it is not trivial to exploit such latent variable models to capture the dependency among discrete words while pursuing complex visual-language alignment in image captioning. In this paper, we break the deeply rooted conventions in learning Transformer-based encoder-decoders, and propose a new diffusion-model-based paradigm tailored for image captioning, namely Semantic-Conditional Diffusion Networks (SCD-Net). Technically, for each input image, we first search for semantically relevant sentences via a cross-modal retrieval model to convey comprehensive semantic information. The rich semantics are further regarded as a semantic prior to trigger the learning of a Diffusion Transformer, which produces the output sentence in a diffusion process. In SCD-Net, multiple Diffusion Transformer structures are stacked to progressively strengthen the output sentence with better visual-language alignment and linguistic coherence in a cascaded manner. Furthermore, to stabilize the diffusion process, a new self-critical sequence training strategy is designed to guide the learning of SCD-Net with the knowledge of a standard autoregressive Transformer model. Extensive experiments on the COCO dataset demonstrate the promising potential of using diffusion models in the challenging image captioning task. Source code is available at \url{https://github.com/YehLi/xmodaler/tree/master/configs/image_caption/scdnet}.
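As an illustration of the first step, the sketch below retrieves semantically relevant sentences for an image by cosine similarity between precomputed image and sentence embeddings. It is only a generic stand-in for SCD-Net's cross-modal retrieval model; the embedding source, dimensionality, and value of `k` are assumptions.

```python
import numpy as np

def retrieve_semantic_prior(image_feat, sentence_feats, sentences, k=5):
    """Return the top-k sentences most similar to the image embedding.

    Minimal sketch of cross-modal retrieval; the actual retrieval model
    used by SCD-Net is not reproduced here.
    """
    img = image_feat / (np.linalg.norm(image_feat) + 1e-9)
    txt = sentence_feats / (np.linalg.norm(sentence_feats, axis=1, keepdims=True) + 1e-9)
    scores = txt @ img                      # cosine similarity per sentence
    top = np.argsort(-scores)[:k]
    return [sentences[i] for i in top]

# toy usage with random embeddings
rng = np.random.default_rng(0)
sents = [f"caption {i}" for i in range(20)]
print(retrieve_semantic_prior(rng.normal(size=512), rng.normal(size=(20, 512)), sents, k=3))
```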
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
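For concreteness, the following is a minimal ESPCN-style 3x upscaling network of the kind of lightweight, quantization-friendly model the challenge targets. It is not one of the submitted solutions; the layer widths and depth are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinySR3x(nn.Module):
    """Minimal ESPCN-style 3x super-resolution network (illustrative only)."""
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * 9, 3, padding=1),   # 3 output channels * 3^2
        )
        self.shuffle = nn.PixelShuffle(3)               # depth-to-space upscaling

    def forward(self, x):
        return self.shuffle(self.body(x))

lr = torch.randn(1, 3, 360, 640)      # low-resolution input
print(TinySR3x()(lr).shape)           # torch.Size([1, 3, 1080, 1920]), i.e. Full HD
```

A model like this would still need post-training INT8 quantization (e.g. via TensorFlow Lite or a vendor toolchain) before deployment on the edge NPU.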
MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again without focusing on token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows. (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. Even when the token mixer is specified as a random matrix to mix tokens, the resulting model RandFormer yields an accuracy of >81%, outperforming IdentityFormer. One can thus rest assured of MetaFormer's results when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking the common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model CAFormer sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224x224 resolution under normal supervised training, without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU yet achieves better performance. We expect StarReLU to find great potential in MetaFormer-like models alongside other neural networks.
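StarReLU has the form s * relu(x)^2 + b with a learnable scalar scale and bias. The sketch below is a minimal rendition of that formula; the initial values follow the variance-normalizing derivation under a standard-normal input assumption and should be treated as a sketch rather than a drop-in reimplementation of the paper's code.

```python
import torch
import torch.nn as nn

class StarReLU(nn.Module):
    """StarReLU: s * relu(x)**2 + b with learnable scalar scale and bias."""
    def __init__(self, scale=0.8944, bias=-0.4472, learnable=True):
        super().__init__()
        # initial values assume unit-variance inputs; treat them as illustrative
        self.scale = nn.Parameter(torch.tensor(scale), requires_grad=learnable)
        self.bias = nn.Parameter(torch.tensor(bias), requires_grad=learnable)

    def forward(self, x):
        return self.scale * torch.relu(x) ** 2 + self.bias

x = torch.randn(4, 8)
print(StarReLU()(x).shape)   # torch.Size([4, 8])
```

Compared with GELU, the activation needs only a squaring, a scalar multiply, and a scalar add, which is where the claimed FLOP reduction comes from.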
Recent efforts on multi-modal Transformers have improved visually rich document understanding (VrDU) tasks by incorporating visual and textual information. However, existing approaches mainly focus on fine-grained elements such as words and document image patches, which makes it hard for them to learn from coarse-grained elements, including natural lexical units like phrases and salient visual regions such as prominent image areas. In this paper, we attach greater importance to coarse-grained elements containing high-density information and consistent semantics, which are valuable for document understanding. First, a document graph is proposed to model the complex relationships among multi-grained multi-modal elements, in which salient visual regions are detected by a cluster-based method. Then, a multi-modal Transformer named MMLayout is proposed to incorporate coarse-grained information into existing pre-trained fine-grained multi-modal Transformers based on the graph. In MMLayout, coarse-grained information is aggregated from the fine-grained, and then, after further processing, fused back into the fine-grained for the final prediction. Furthermore, common sense enhancement is introduced to exploit the semantic information of natural lexical units. Experimental results on four tasks, including information extraction and document question answering, show that our method can improve the performance of multi-modal Transformers based on fine-grained elements and achieve better performance with fewer parameters. Qualitative analyses show that our method can capture consistent semantics in coarse-grained elements.
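The "aggregate, then fuse back" idea can be sketched as follows: fine-grained features (words, patches) are pooled into their coarse-grained groups (phrases, salient regions) and then added back to the fine-grained features. The cluster assignment, mean pooling, and residual fusion here are illustrative assumptions; MMLayout's actual graph construction and fusion layers are omitted.

```python
import torch

def aggregate_coarse(fine_feats, assignment, num_coarse):
    """Average fine-grained features into coarse groups, then fuse back.

    `assignment[i]` maps fine element i to a phrase / salient-region id.
    """
    d = fine_feats.size(-1)
    coarse = torch.zeros(num_coarse, d)
    counts = torch.zeros(num_coarse, 1)
    coarse.index_add_(0, assignment, fine_feats)
    counts.index_add_(0, assignment, torch.ones(fine_feats.size(0), 1))
    coarse = coarse / counts.clamp(min=1)
    # fuse coarse information back into the fine-grained features
    return fine_feats + coarse[assignment]

fine = torch.randn(6, 32)                       # 6 words/patches, dim 32
groups = torch.tensor([0, 0, 1, 1, 1, 2])       # 3 coarse elements
print(aggregate_coarse(fine, groups, 3).shape)  # torch.Size([6, 32])
```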
Current graph neural networks (GNNs) suffer from the over-smoothing problem, which leads to indistinguishable node representations and lower model performance as more GNN layers are stacked. Many methods have been proposed in recent years to address this issue. However, existing methods for alleviating over-smoothing emphasize model performance while neglecting the over-smoothness of node representations; moreover, they adopt only one approach at a time and lack an overall framework for jointly exploiting multiple solutions to the over-smoothing challenge. To address these issues, we propose Grato, a framework based on neural architecture search that automatically searches for GNN architectures. Grato adopts a novel loss function to balance model performance and representation smoothness. In addition to existing methods, our search space also includes DropAttribute, a new scheme for mitigating the over-smoothing challenge, so that diverse solutions can be fully exploited. We conduct extensive experiments on six real-world datasets to evaluate Grato, which show that Grato outperforms the baselines on over-smoothing metrics and achieves competitive performance in accuracy. Grato is especially effective and robust as the number of GNN layers increases. Further experiments confirm the quality of the node representations learned by Grato and the effectiveness of its model architecture. We provide the code of Grato on GitHub (\url{https://github.com/fxsxjtu/grato}).
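A loss that balances performance and smoothness could, for instance, add a penalty based on the mean pairwise cosine distance of node representations (a common over-smoothing metric, akin to MAD) to the classification loss. Grato's actual loss formulation is not given in the abstract, so the combination and the weight `lam` below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, node_repr, lam=0.5):
    """Cross-entropy plus a penalty that discourages over-smoothed representations."""
    ce = F.cross_entropy(logits, labels)
    z = F.normalize(node_repr, dim=1)
    cos_dist = 1.0 - z @ z.t()                # pairwise cosine distances
    mad = cos_dist.mean()                     # higher = less over-smoothed
    return ce + lam * (1.0 - mad)             # penalize low average distance

logits = torch.randn(5, 3)
labels = torch.tensor([0, 2, 1, 0, 2])
reps = torch.randn(5, 16)
print(combined_loss(logits, labels, reps))
```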
Twitter bot detection is an important and meaningful task. Existing text-based methods can deeply analyze the content of user tweets and achieve high performance. However, novel Twitter bots evade these detectors by stealing genuine users' tweets and diluting malicious content with benign tweets. These novel bots are thus considered to be characterized by semantic inconsistency. In addition, methods leveraging the Twitter graph structure have emerged recently and shown great competitiveness. However, hardly any method has made the text and graph modalities deeply fuse and interact, so as to leverage their advantages and learn the relative importance of the two modalities. In this paper, we propose a novel model named BIC that makes the text and graph modalities deeply interact and detects tweet semantic inconsistency. Specifically, BIC contains a text propagation module and a graph propagation module, which conduct bot detection along the text and the graph structure respectively, and a provably effective text-graph interaction module to make the two interact. BIC also contains a semantic consistency detection module to learn semantic consistency information from tweets. Extensive experiments demonstrate that our framework outperforms competitive baselines on a comprehensive Twitter bot benchmark. We also prove the effectiveness of the proposed interaction and semantic consistency detection.
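One way to let the two modalities interact is bidirectional cross-attention between text and graph representations, as in the minimal sketch below. This is a generic stand-in, not BIC's actual interaction module; the dimensions, residual connections, and head count are assumptions.

```python
import torch
import torch.nn as nn

class TextGraphInteraction(nn.Module):
    """Bidirectional cross-attention between text and graph representations (sketch)."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.text_to_graph = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.graph_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, graph):
        text_out, _ = self.graph_to_text(text, graph, graph)   # text attends to graph
        graph_out, _ = self.text_to_graph(graph, text, text)   # graph attends to text
        return text + text_out, graph + graph_out              # residual fusion

text = torch.randn(2, 10, 64)    # 2 users, 10 tweet tokens each
graph = torch.randn(2, 5, 64)    # 2 users, 5 neighbor nodes each
t, g = TextGraphInteraction()(text, graph)
print(t.shape, g.shape)          # torch.Size([2, 10, 64]) torch.Size([2, 5, 64])
```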
Graph anomaly detection on attributed networks has become a prevalent research topic due to its broad applications in many influential domains. In real-world scenarios, nodes and edges in attributed networks usually exhibit distinct heterogeneity, i.e., the attributes of different types of nodes show great diversity, and different types of relations carry diverse meanings. In such networks, anomalies usually manifest differently from the majority along various perspectives of this heterogeneity. However, existing graph anomaly detection methods cannot exploit the heterogeneity in attributed networks, which is highly relevant to anomaly detection. In light of this problem, we propose AHEAD: a heterogeneity-aware unsupervised graph anomaly detection method based on an encoder-decoder framework. Specifically, in the encoder, we design three levels of attention, i.e., attribute-level, node-type-level, and edge-level attention, to capture the heterogeneity of the network structure, the node attributes, and the information of individual nodes. In the decoder, we exploit structure, attribute, and node-type reconstruction terms to obtain an anomaly score for each node. Extensive experiments show the superiority of AHEAD over the state of the art in the unsupervised setting on several real-world heterogeneous information networks. Further experiments verify the effectiveness and robustness of our triple attention, model backbone, and decoder.
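A decoder of this kind typically scores each node by combining its reconstruction errors. The sketch below shows one plausible way to turn attribute, structure, and node-type reconstructions into per-node anomaly scores; the error definitions and weights are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def anomaly_scores(x, x_hat, a, a_hat, t, t_logits, alpha=0.4, beta=0.4, gamma=0.2):
    """Per-node anomaly score from attribute, structure, and node-type errors.

    x / x_hat: node attributes and their reconstruction,
    a / a_hat: adjacency rows and their reconstruction,
    t / t_logits: node-type labels and predicted type logits.
    """
    attr_err = ((x - x_hat) ** 2).mean(dim=1)
    struct_err = ((a - a_hat) ** 2).mean(dim=1)
    type_err = F.cross_entropy(t_logits, t, reduction="none")
    return alpha * attr_err + beta * struct_err + gamma * type_err

n, d, k = 6, 8, 3
scores = anomaly_scores(torch.rand(n, d), torch.rand(n, d),
                        torch.rand(n, n), torch.rand(n, n),
                        torch.randint(0, k, (n,)), torch.randn(n, k))
print(scores.argsort(descending=True)[:2])   # two most anomalous nodes
```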
Knowledge graph embedding (KGE) aims to map entities and relations to low-dimensional spaces and has become the \textit{de-facto} standard for knowledge graph completion. Most existing KGE methods suffer from the sparsity challenge, where it is hard to predict entities that appear infrequently in knowledge graphs. In this work, we propose a novel framework, KRACL, to alleviate the widespread sparsity in KGs with graph context and contrastive learning. First, we propose the Knowledge Relational Attention Network (KRAT) to leverage the graph context by simultaneously projecting neighboring triples into different latent spaces and jointly aggregating messages with an attention mechanism. KRAT is able to capture the subtle semantic information and importance of different context triples, as well as leverage multi-hop information in knowledge graphs. Second, we propose the knowledge contrastive loss by combining the contrastive loss with the cross-entropy loss, which introduces more negative samples and thus enriches the feedback to sparse entities. Our experiments demonstrate that KRACL achieves superior results across various standard knowledge graph benchmarks, especially on WN18RR and NELL-995, which have large numbers of low in-degree entities. Extensive experiments also bear out KRACL's effectiveness in handling sparse knowledge graphs and its robustness against noisy triples.
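The combination of a contrastive term with a cross-entropy term could look like the sketch below, where all non-target entities act as negatives in an InfoNCE-style softmax and a standard binary cross-entropy term is kept alongside it. The exact form, temperature, and weighting used by KRACL are not given in the abstract, so these are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def knowledge_contrastive_loss(scores, target, temperature=0.5, mu=0.5):
    """Softmax contrastive (InfoNCE-style) term plus a cross-entropy term.

    `scores` holds the model's score for every candidate entity per query
    triple; `target` is the index of the true entity.
    """
    contrastive = F.cross_entropy(scores / temperature, target)   # all other entities as negatives
    ce = F.binary_cross_entropy_with_logits(
        scores, F.one_hot(target, scores.size(1)).float())        # standard KGE-style CE term
    return mu * contrastive + (1 - mu) * ce

scores = torch.randn(4, 100)          # 4 queries, 100 candidate entities
target = torch.tensor([3, 17, 42, 0])
print(knowledge_contrastive_loss(scores, target))
```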
Modern efficient convolutional neural networks (CNNs) always use depthwise separable convolutions (DSCs) and neural architecture search (NAS) to reduce the number of parameters and the computational complexity. But some inherent characteristics of networks are overlooked. Inspired by visualizing feature maps and n$\times$n (n$>$1) convolution kernels, this paper introduces several guidelines to further improve parameter efficiency and inference speed. Based on these guidelines, our parameter-efficient CNN architecture, called \textit{VGNetG}, achieves higher accuracy and lower latency than previous networks with about 30%$\sim$50% fewer parameters. Our VGNetG-1.0MP achieves 67.7% top-1 accuracy with 0.99M parameters and 69.2% top-1 accuracy with 1.14M parameters on the ImageNet classification dataset. Furthermore, we demonstrate that edge detectors can replace learnable depthwise convolution layers to mix features, by substituting fixed edge detection kernels for the n$\times$n kernels. Our VGNetF-1.5MP achieves 64.4% (-3.2%) top-1 accuracy, and 66.2% (-1.4%) top-1 accuracy with additional Gaussian kernels.
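Replacing a learnable depthwise layer with fixed edge-detection kernels can be sketched as a grouped 3x3 convolution whose weights are set to Sobel filters and frozen. This is only an illustration of the idea; VGNetF's actual kernel set (and the additional Gaussian kernels) is not reproduced here.

```python
import torch
import torch.nn as nn

def fixed_edge_depthwise(channels):
    """Depthwise 3x3 convolution with fixed Sobel edge-detection kernels (sketch)."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    sobel_y = sobel_x.t()
    conv = nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False)
    # alternate horizontal and vertical edge detectors across channels
    kernels = torch.stack([sobel_x if c % 2 == 0 else sobel_y for c in range(channels)])
    conv.weight.data = kernels.unsqueeze(1)    # shape: (channels, 1, 3, 3)
    conv.weight.requires_grad_(False)          # kernels stay fixed during training
    return conv

x = torch.randn(1, 8, 32, 32)
print(fixed_edge_depthwise(8)(x).shape)        # torch.Size([1, 8, 32, 32])
```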